Alignment Problem



ResAlignNet: A Data-Driven Approach for INS/DVL Alignment

Damari, Guy, Klein, Itzik

arXiv.org Artificial Intelligence

Abstract--Autonomous underwater vehicles rely on precise navigation systems that combine the inertial navigation system and the Doppler velocity log for successful missions in challenging environments where satellite navigation is unavailable. The effectiveness of this integration critically depends on accurate alignment between the sensor reference frames. Standard model-based alignment methods between these sensor systems suffer from lengthy convergence times, dependence on prescribed motion patterns, and reliance on external aiding sensors, significantly limiting operational flexibility. To address these limitations, this paper presents ResAlignNet, a data-driven approach using the 1D ResNet-18 architecture that transforms the alignment problem into deep neural network optimization, operating as an in-situ solution that requires only onboard sensors, without external positioning aids or complex vehicle maneuvers, while achieving rapid convergence in seconds. Additionally, the approach demonstrates the learning capabilities of Sim2Real transfer, enabling training on synthetic data while deploying on operational sensor measurements. Experimental validation using the Snapir autonomous underwater vehicle demonstrates that ResAlignNet achieves alignment accuracy within 0.8° using only 25 seconds of data collection, representing a 65% reduction in convergence time compared to standard velocity-based methods. The trajectory-independent solution eliminates motion pattern requirements and enables immediate vehicle deployment without lengthy pre-mission procedures, advancing underwater navigation capabilities through robust sensor-agnostic alignment that scales across different operational scenarios and sensor specifications.

Underwater navigation systems are critical for a wide range of marine applications, particularly autonomous underwater vehicles (AUVs) operating in challenging environments where global navigation satellite systems (GNSSs) are unavailable [1].
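The abstract gives no implementation details; as a rough illustration of the kind of model it describes, the following is a minimal PyTorch sketch of a 1D ResNet-18-style regressor that maps a window of inertial and DVL measurements to alignment angles. The channel layout, window length, and three-angle output are assumptions for illustration, not the published ResAlignNet configuration.

```python
# Hypothetical sketch of a 1D-ResNet regressor for INS/DVL alignment angles.
# Channel counts and the input layout (stacked accelerometer, gyroscope, and
# DVL velocity channels over a 25 s window) are assumptions, not the authors'
# published configuration.
import torch
import torch.nn as nn

class BasicBlock1D(nn.Module):
    """Standard residual block with two 1D convolutions."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm1d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

class ResAlignNetSketch(nn.Module):
    """ResNet-18-style 1D backbone that regresses three alignment angles."""
    def __init__(self, in_channels=9):  # e.g. 3 accel + 3 gyro + 3 DVL velocity channels
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm1d(64), nn.ReLU(inplace=True),
            nn.MaxPool1d(kernel_size=3, stride=2, padding=1),
        )
        self.layers = nn.Sequential(
            BasicBlock1D(64, 64), BasicBlock1D(64, 64),
            BasicBlock1D(64, 128, stride=2), BasicBlock1D(128, 128),
            BasicBlock1D(128, 256, stride=2), BasicBlock1D(256, 256),
            BasicBlock1D(256, 512, stride=2), BasicBlock1D(512, 512),
        )
        self.head = nn.Linear(512, 3)  # roll, pitch, yaw misalignment

    def forward(self, x):                # x: (batch, channels, time)
        f = self.layers(self.stem(x))
        f = f.mean(dim=-1)               # global average pooling over time
        return self.head(f)

# Example: a 25 s window sampled at 100 Hz -> 2500 time steps.
model = ResAlignNetSketch()
angles = model(torch.randn(4, 9, 2500))  # predicted misalignment angles, shape (4, 3)
```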


Your contrastive learning problem is secretly a distribution alignment problem

Neural Information Processing Systems

Intuitively, by using more information from the distribution of latents, our approach allows a more distribution-aware manipulation of the relationships within augmented sample sets.
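For context, a minimal sketch of the standard per-sample contrastive (InfoNCE) objective that such distribution-aware methods build on is shown below; this is the textbook formulation, not the paper's proposed loss.

```python
# Minimal InfoNCE contrastive loss over two augmented views of the same batch.
# This is the standard per-sample formulation; the paper's distribution-aware
# treatment of the augmented sample set is not reproduced here.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) latents of two augmentations of the same inputs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```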


Getting In Contract with Large Language Models -- An Agency Theory Perspective On Large Language Model Alignment

Kaltenpoth, Sascha, Müller, Oliver

arXiv.org Artificial Intelligence

Adopting large language models (LLMs) in organizations could revolutionize how we live and work. However, LLMs can generate off-topic, discriminatory, or harmful content. This AI alignment problem often stems from misspecifications during LLM adoption that go unnoticed by the principal because of the LLM's black-box nature. While various research disciplines have investigated AI alignment, they neither address the information asymmetries between organizational adopters and black-box LLM agents nor consider organizational AI adoption processes. We therefore propose LLM ATLAS (LLM Agency Theory-Led Alignment Strategy), a conceptual framework grounded in agency (contract) theory, to mitigate alignment problems during organizational LLM adoption. We conduct a conceptual literature analysis using the organizational LLM adoption phases and agency theory as concepts. Our approach results in (1) an extended literature analysis process specific to AI alignment methods during organizational LLM adoption and (2) a first LLM alignment problem-solution space.


Normative Conflicts and Shallow AI Alignment

Millière, Raphaël

arXiv.org Artificial Intelligence

The progress of AI systems such as large language models (LLMs) raises increasingly pressing concerns about their safe deployment. This paper examines the value alignment problem for LLMs, arguing that current alignment strategies are fundamentally inadequate to prevent misuse. Despite ongoing efforts to instill norms such as helpfulness, honesty, and harmlessness in LLMs through fine-tuning based on human preferences, they remain vulnerable to adversarial attacks that exploit conflicts between these norms. I argue that this vulnerability reflects a fundamental limitation of existing alignment methods: they reinforce shallow behavioral dispositions rather than endowing LLMs with a genuine capacity for normative deliberation. Drawing on research in moral psychology, I show how humans' ability to engage in deliberative reasoning enhances their resilience against similar adversarial tactics. LLMs, by contrast, lack a robust capacity to detect and rationally resolve normative conflicts, leaving them susceptible to manipulation; even recent advances in reasoning-focused LLMs have not addressed this vulnerability. This "shallow alignment" problem carries significant implications for AI safety and regulation, suggesting that current approaches are insufficient for mitigating potential harms posed by increasingly capable AI systems.


Unpacking the Flaws of Techbro Dreams of the Future

Mother Jones

Cutaway view of a fictional space colony concept painted by artist Rick Guidice as part of a NASA art program in the 1970s. This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Elon Musk once joked: "I would like to die on Mars." Musk is, in fact, deadly serious about colonizing the Red Planet. Part of his motivation is the idea of having a "back-up" planet in case some future catastrophe renders the Earth uninhabitable. Musk has suggested that a million people may be calling Mars home by 2050 -- and he's hardly alone in his enthusiasm. Venture capitalist Marc Andreessen believes the world can easily support 50 billion people, and more than that once we settle other planets. And Jeff Bezos has spoken of exploiting the resources of the moon and the asteroids to build giant space stations. "I would love to see a trillion humans living in the solar system," he has said. Not so fast, cautions science journalist Adam Becker.


Where has the left's technological audacity gone?

Phillips, Leigh

The Guardian

Techno-optimism – the belief that technology will usher in a golden age for humanity – is in vogue once more. In 2022, a clutch of pseudonymous San Francisco artificial intelligence (AI) scenesters published a Substack post entitled "Effective Accelerationism", which argued for maximum acceleration of technological advancement. The 10-point manifesto, which proclaimed that "the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness" was imminent, quickly went viral, as did follow-up posts. Effective accelerationism, or "e/acc", exploded from being a fringe movement dedicated to pushing back against AI extinction-fearing "doomers" to being namechecked by major Silicon Valley CEOs such as Garry Tan, the CEO of start-up accelerator Y Combinator; Sam Altman, head of OpenAI; Marc Andreessen, the billionaire software engineer; and Elon Musk. In 2023, Andreessen issued his Techno-Optimist Manifesto, expanding beyond the e/acc's focus on AI to encompass all questions of technological progress.


They wanted to save us from a dark AI future. Then six people were killed

The Guardian

Years before she became the peculiar central thread linking a double homicide in Pennsylvania, the fatal shooting of a federal agent in Vermont and the murder of an elderly landlord in California, a computer programmer bought a sailboat. The programmer was known to friends, foes and followers as Ziz. She had come to the San Francisco Bay Area in 2016 as part of an influx of young people arriving to study the dangers that artificial intelligence could pose to humanity. In one of the most expensive regions of the United States, however, it is difficult to save the world when you can't make rent. So she bought a boat for $600 and moored it next to a friend's vessel in a marina. For five years, she used it as an occasional, cramped bunk. In her waking hours, she worked on a blog of provocative and increasingly extreme ideas about confrontation and retaliation. At night, she fell asleep as the boat rocked back and forth, drifting with the flotsam of greater Silicon Valley. Then, on the night of 19 August 2022, her sister and a friend reported that they saw her fall overboard. The Coast Guard and local authorities scrambled boats and aircraft. After a nearly 30-hour search, neither Ziz nor her body could be found. A newspaper in Alaska, where she was born, published a short obituary referring to her by her birth name: "Jack Amadeus LaSota left our lives but not our hearts on Aug 19 after a boating accident. Loving adventure, friends and family, music, blueberries, biking, computer games and animals, you are missed." Ziz's ideas did not die in the waters of the California coast. She had faked her drowning and gone underground, before being arrested last month in western Maryland and charged with trespassing and illegal transportation of a firearm. The targets of Ziz's ire, who include some of Silicon Valley's most prominent intellectuals, have taken security precautions. "Ziz is not stupid," someone familiar with her, who asked to remain anonymous, told me. "This is a very smart person – both smart and crazy." Ziz's writing had polarized members of a niche but influential movement of AI theorists and tech bloggers who call themselves the "rationalists". The movement is less about specific ideas than it is about an ethos – applying rigorous, mathematically informed thinking to AI, philosophy, psychology and the big questions of our time. Rationalists are odd, though often charming, people. They tend to be fantasy and sci-fi geeks, use lots of jargon and think intensely about things other people barely think about at all.


Active Alignments of Lens Systems with Reinforcement Learning

Burkhardt, Matthias, Schmähling, Tobias, Layh, Michael, Windisch, Tobias

arXiv.org Artificial Intelligence

Aligning a lens system relative to an imager is a critical challenge in camera manufacturing. While optimal alignment can be mathematically computed under ideal conditions, real-world deviations caused by manufacturing tolerances often render this approach impractical. Measuring these tolerances can be costly or even infeasible, and neglecting them may result in suboptimal alignments. We propose a reinforcement learning (RL) approach that learns exclusively in the pixel space of the sensor output, eliminating the need to develop expert-designed alignment concepts. We conduct an extensive benchmark study and show that our approach surpasses other methods in speed, precision, and robustness. We further introduce relign, a realistic, freely explorable, open-source simulation utilizing physically based rendering that models optical systems with non-deterministic manufacturing tolerances and noise in robotic alignment movement. It provides an interface to popular machine learning frameworks, enabling seamless experimentation and development. Our work highlights the potential of RL in a manufacturing environment to enhance the efficiency of optical alignments while minimizing the need for manual intervention.
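The abstract describes an agent acting directly on the sensor's pixel output; the toy, gymnasium-style environment below illustrates what such an interface could look like. It is a hypothetical stand-in: relign's actual API, observation shapes, and reward are not given in the abstract, so all names and dynamics here are assumptions.

```python
# Hypothetical pixel-space lens-alignment environment (gymnasium-style API).
# The hidden misalignment plays the role of an unknown manufacturing tolerance;
# the agent only ever sees rendered images and a sharpness-like reward.
import numpy as np

class LensAlignEnvSketch:
    """Toy stand-in for a pixel-space lens-alignment environment."""
    def __init__(self, image_shape=(64, 64)):
        self.image_shape = image_shape
        self.offset = None  # hidden misalignment (dx, dy, dz, tilt_x, tilt_y)

    def reset(self, seed=None):
        rng = np.random.default_rng(seed)
        self.offset = rng.normal(scale=0.1, size=5)       # unknown tolerance per unit
        return self._render(), {}

    def step(self, action):
        self.offset = self.offset - np.asarray(action)    # apply an alignment move
        sharpness = float(np.exp(-np.sum(self.offset ** 2)))
        terminated = sharpness > 0.99                     # aligned well enough
        return self._render(), sharpness, terminated, False, {}

    def _render(self):
        # Blur proxy: the further from alignment, the flatter the rendered spot.
        x = np.linspace(-1, 1, self.image_shape[0])
        psf = np.exp(-np.outer(x, x) ** 2 / (0.01 + np.sum(self.offset ** 2)))
        return psf.astype(np.float32)

# A random-policy rollout; a real agent (e.g. PPO or SAC) would replace this loop.
env = LensAlignEnvSketch()
obs, _ = env.reset(seed=0)
for _ in range(20):
    action = np.random.normal(scale=0.02, size=5)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated:
        break
```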


The feasibility of multi-graph alignment: a Bayesian approach

Vassaux, Louis, Massoulié, Laurent

arXiv.org Machine Learning

We establish thresholds for the feasibility of random multi-graph alignment in two models. In the Gaussian model, we demonstrate an "all-or-nothing" phenomenon: above a critical threshold, exact alignment is achievable with high probability, while below it, even partial alignment is statistically impossible. In the sparse Erdős-Rényi model, we rigorously identify a threshold below which no meaningful partial alignment is possible and conjecture that above this threshold, partial alignment can be achieved. To prove these results, we develop a general Bayesian estimation framework over metric spaces, which provides insight into a broader class of high-dimensional statistical problems.
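The abstract does not restate the models; for orientation, the correlated Gaussian (Wigner) alignment model commonly used in this literature can be written as below. The paper's multi-graph formulation may differ, for instance in the number of observed graphs and their correlation structure.

```latex
% Sketch of a standard correlated Gaussian alignment model (an assumption;
% the paper's exact multi-graph setup is not reproduced in the abstract).
% Two n x n symmetric Gaussian matrices are observed, correlated through a
% hidden permutation \pi^* that the estimator must recover.
\[
  B \;=\; \rho \,\Pi^{*\top} A \,\Pi^{*} \;+\; \sqrt{1-\rho^{2}}\, Z ,
\]
% where $A$ and $Z$ are independent Gaussian Wigner matrices, $\Pi^{*}$ is the
% permutation matrix of $\pi^{*}$, and $\rho \in [0,1]$ controls the correlation.
% "Exact alignment" means recovering $\pi^{*}$ on all vertices; "partial
% alignment" means recovering a positive fraction of them.
```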